An efficient improvement of the Newton method for solving nonconvex optimization problems

Author

  • Mohammad Heydari, Department of Mathematics, Yazd University, P. O. Box 89195-74, Yazd, Iran
Abstract:

The Newton method is one of the best-known line search methods for minimizing functions. It is well known that the search direction and the step length play important roles in this class of methods for solving optimization problems. In this investigation, a new modification of the Newton method for solving unconstrained optimization problems is presented. The significant merit of the proposed method is that the step length $\alpha_k$ at each iteration is equal to 1. Additionally, the convergence analysis of this iterative algorithm is established under suitable conditions. Some illustrative examples are provided to show the validity and applicability of the presented method, and a comparison is made with several other existing methods.
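
As background, the sketch below (Python with NumPy) shows the classical Newton line-search framework that the abstract refers to, where the iteration is $x_{k+1} = x_k + \alpha_k d_k$ with the Newton direction $d_k$. It is not the paper's modified method; the Rosenbrock test function, the Armijo backtracking parameters, and the steepest-descent safeguard are arbitrary illustrative choices. The point of contrast is that the proposed method takes $\alpha_k = 1$ at every iteration, so no backtracking loop is needed.

```python
# NOT the paper's modified method: a generic Newton line-search iteration,
# x_{k+1} = x_k + alpha_k * d_k,  d_k = -[hess f(x_k)]^{-1} grad f(x_k),
# shown only to illustrate where the step length alpha_k enters.
import numpy as np

def f(x):
    # Rosenbrock function: a standard nonconvex test problem (arbitrary choice)
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

def hess(x):
    return np.array([
        [1200.0 * x[0] ** 2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
        [-400.0 * x[0], 200.0],
    ])

def newton_line_search(x0, tol=1e-8, max_iter=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)   # Newton direction
        if g @ d >= 0.0:                   # nonconvex safeguard: keep a descent direction
            d = -g
        alpha = 1.0                        # classical methods may have to shrink this ...
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5                   # ... via Armijo backtracking
        x = x + alpha * d
    return x

print(newton_line_search([-1.2, 1.0]))     # approaches the minimizer (1, 1)
```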

similar resources

An Efficient Neurodynamic Scheme for Solving a Class of Nonconvex Nonlinear Optimization Problems

By the p-power (or partial p-power) transformation, the Lagrangian function of a nonconvex optimization problem becomes locally convex. In this paper, we present a neural network based on an NCP function for solving the nonconvex optimization problem. An important feature of this neural network is the one-to-one correspondence between its equilibria and KKT points of the nonconvex optimizatio...
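
For context only, and not taken from the paper above: an NCP (nonlinear complementarity problem) function such as the well-known Fischer-Burmeister function turns the complementarity part of the KKT conditions into a system of equations, which is the kind of reformulation a neurodynamic model can be driven to solve. The sketch below simply evaluates this function on a few arbitrary example pairs.

```python
# Generic background, not code from the cited paper: the Fischer-Burmeister
# NCP function phi(a, b) = a + b - sqrt(a^2 + b^2) satisfies
#     phi(a, b) = 0   <=>   a >= 0,  b >= 0,  a * b = 0,
# so KKT complementarity conditions can be rewritten as equations whose roots
# a dynamical (neural network) model can be driven towards.
import numpy as np

def fischer_burmeister(a, b):
    return a + b - np.sqrt(a * a + b * b)

# Example pairs (multiplier, -constraint value) for an inequality g(x) <= 0:
print(fischer_burmeister(0.0, 3.0))   # 0.0  -> inactive constraint, zero multiplier
print(fischer_burmeister(2.0, 0.0))   # 0.0  -> active constraint, positive multiplier
print(fischer_burmeister(1.0, 1.0))   # != 0 -> complementarity violated
```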

Quasi-Newton Methods for Nonconvex Constrained Multiobjective Optimization

Here, a quasi-Newton algorithm for constrained multiobjective optimization is proposed. Under suitable assumptions, global convergence of the algorithm is established.

An inexact Newton method for nonconvex equality constrained optimization

We present a matrix-free line search algorithm for large-scale equality constrained optimization that allows for inexact step computations. For strictly convex problems, the method reduces to the inexact sequential quadratic programming approach proposed by Byrd et al. [SIAM J. Optim. 19(1) 351–369, 2008]. For nonconvex problems, the methodology developed in this paper allows for the presence o...
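
As a rough, generic illustration of inexact step computation (not the algorithm of the paper above), the sketch below applies Newton's method to the KKT system of a toy equality-constrained quadratic program and solves each linear system only approximately with a capped number of GMRES iterations; the toy objective, constraint, starting point, and iteration caps are all arbitrary assumptions, and SciPy is assumed to be available.

```python
# A generic sketch (not the paper's algorithm): "inexact Newton" steps for
# the KKT system of  min f(x)  s.t.  c(x) = 0,
#     [ H   A^T ] [dx  ]      [ grad f(x) + A^T lam ]
#     [ A    0  ] [dlam]  = - [ c(x)                ],
# where each linear system is solved only approximately (capped GMRES iterations).
# The toy problem, min x1^2 + x2^2 s.t. x1 + x2 = 1, is an arbitrary example.
import numpy as np
from scipy.sparse.linalg import gmres

def kkt_step(x, lam):
    grad_f = 2.0 * x                         # gradient of f(x) = x1^2 + x2^2
    H = 2.0 * np.eye(2)                      # Hessian of the Lagrangian
    A = np.array([[1.0, 1.0]])               # Jacobian of c(x) = x1 + x2 - 1
    c = np.array([x[0] + x[1] - 1.0])

    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = -np.concatenate([grad_f + A.T @ lam, c])

    # Inexact solve: cap the Krylov iterations instead of solving to full accuracy.
    step, _ = gmres(K, rhs, maxiter=5)
    return x + step[:2], lam + step[2:]

x, lam = np.array([2.0, -3.0]), np.array([0.0])
for _ in range(5):
    x, lam = kkt_step(x, lam)
print(x)                                     # approaches the solution (0.5, 0.5)
```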

An efficient approach for solving layout problems

This paper offers an approach that could be useful for diverse types of layout problems and even area allocation problems. With this approach there is no need for a large number of discrete variables; large-scale layout problems can be solved in polynomial time using only a few continuous variables. This results from dividing the area into discrete and continuous dimensions. Also, defining decision var...

An Efficient Optimization Method for Solving Unsupervised Data Classification Problems

Unsupervised data classification (or clustering) is a useful descriptive task in data mining that seeks to group objects into homogeneous clusters based on similarity; it is used in many medical disciplines and in a variety of other applications. In general, there is no single algorithm that is suitable for all types of data, conditions, and applications. Each algorithm has ...

An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems

Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...

Journal title

Volume 7, Issue 1, Pages 69-85

Publication date: 2019-01-01
